Brain midline shift (MLS) is one of the most critical factors in clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require labor-intensive millimeter-level annotations but also perform poorly because they depend on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate MLS measurement as a deformation estimation problem and solve it using a small number of MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a large number of unlabeled MLS cases and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how an image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Our method achieves state-of-the-art performance on a real clinical brain hemorrhage dataset and generates interpretable deformation fields.
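The abstract does not give implementation details, but the deformation-based readout can be illustrated with a minimal sketch: assuming the model has already produced a dense horizontal deformation field for an axial slice, the shift is the largest displacement along the ideal midline. The function name, mask convention, and 0.5 mm spacing below are illustrative assumptions, not from the paper.

```python
import numpy as np

def midline_shift_mm(deformation_x, midline_mask, spacing_mm=0.5):
    """Read an MLS estimate off a dense deformation field.

    deformation_x : (H, W) horizontal displacement (in voxels) predicted
                    for one axial slice, e.g. by a sparse-to-dense model.
    midline_mask  : (H, W) boolean mask of the ideal (undeformed) midline.
    spacing_mm    : in-plane voxel spacing.
    """
    # MLS is conventionally the largest deviation of the deformed midline
    # from the ideal midline, so take the max |displacement| on the midline.
    shifts = np.abs(deformation_x[midline_mask])
    return float(shifts.max() * spacing_mm)

# Toy usage: a 6-voxel peak displacement at 0.5 mm spacing -> 3 mm MLS.
field = np.zeros((256, 256)); field[100:140, 128] = 6.0
mask = np.zeros((256, 256), dtype=bool); mask[:, 128] = True
print(midline_shift_mm(field, mask))  # 3.0
```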
Most deep-learning-based continuous sign language recognition (CSLR) models share a similar backbone consisting of a visual module, a sequential module, and an alignment module. However, due to limited training samples, a connectionist temporal classification loss may not train such CSLR backbones sufficiently. In this work, we propose three auxiliary tasks to enhance CSLR backbones. The first task enhances the visual module, which is sensitive to the insufficient-training problem, from the perspective of consistency. Specifically, since the information in sign languages is mainly conveyed by signers' facial expressions and hand movements, a keypoint-guided spatial attention module is developed to force the visual module to focus on these informative regions, i.e., to enforce spatial attention consistency. Second, noticing that the output features of both the visual and sequential modules represent the same sentence, we impose a sentence embedding consistency constraint between the two modules to better exploit the backbone's power and enhance the representation power of both features. We name the CSLR model trained with these auxiliary tasks consistency-enhanced CSLR; it performs well on signer-dependent datasets, in which all signers appear during both training and testing. To make it more robust in the signer-independent setting, we further propose a signer removal module based on feature disentanglement to remove signer information from the backbone. Extensive ablation studies validate the effectiveness of these auxiliary tasks. More remarkably, with a transformer-based backbone, our model achieves state-of-the-art or competitive performance on five benchmarks: PHOENIX-2014, PHOENIX-2014-T, PHOENIX-2014-SI, CSL, and CSL-Daily.
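One plausible form of the sentence embedding consistency constraint is sketched below; the mean-pooling and cosine-distance choices are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sentence_embedding_consistency(visual_feats, seq_feats):
    """Pool each module's frame-level features into a sentence embedding
    and pull the two embeddings of the same sentence together.

    visual_feats, seq_feats : (T, B, C) outputs of the visual and
    sequential modules for the same video.
    """
    v = visual_feats.mean(dim=0)   # (B, C) sentence embedding, visual module
    s = seq_feats.mean(dim=0)      # (B, C) sentence embedding, sequential module
    # Cosine distance between the two embeddings of the same sentence.
    return (1.0 - F.cosine_similarity(v, s, dim=-1)).mean()

loss = sentence_embedding_consistency(torch.randn(40, 2, 512),
                                      torch.randn(40, 2, 512))
```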
Optimization equips engineers and scientists in a variety of fields with the ability to transcribe their problems into a generic formulation and obtain optimal solutions with relative ease. Industries ranging from aerospace to robotics continue to benefit from advances in optimization theory and the associated algorithmic developments. Nowadays, optimization is used in real time on autonomous systems operating in safety-critical situations, such as self-driving vehicles, so it has become increasingly important to produce robust solutions by incorporating uncertainty into optimization programs. This paper provides a short survey of the state of the art in optimization under uncertainty. It begins with a brief overview of the main classes of optimization without uncertainty; the rest of the paper focuses on methods for handling both aleatoric and epistemic uncertainty. Many of the applications discussed are within the domain of control. The goal of this survey is to briefly touch upon the state of the art in a variety of methods and refer the reader to other literature for more in-depth treatments of the topics discussed here.
Bayesian Optimization is a useful tool for experiment design. Unfortunately, the classical sequential setting of Bayesian Optimization does not translate well to laboratory experiments such as battery design, where measurements may come from different sources and evaluations may require significant waiting times. Multi-fidelity Bayesian Optimization addresses the setting with measurements from different sources, and asynchronous batch Bayesian Optimization provides a framework for selecting new experiments before the results of prior experiments are revealed. This paper proposes an algorithm combining multi-fidelity and asynchronous batch methods. We empirically study the algorithm's behavior and show that it can outperform single-fidelity batch methods and multi-fidelity sequential methods. As an application, we consider designing electrode materials for optimal performance in pouch cells, using experiments with coin cells to approximate battery performance.
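A minimal sketch of how the two ingredients can be combined is shown below: pending experiments are hallucinated at the posterior mean ("kriging believer") to handle asynchrony, and a cost-weighted UCB scores candidate designs at each fidelity. The kernel, acquisition, costs, and candidate sampling are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

COSTS = {0.25: 1.0, 1.0: 10.0}   # e.g. cheap coin-cell proxy vs. pouch-cell run

def next_experiment(X, y, pending, rng):
    """Pick (design, fidelity) for the next asynchronous slot.

    X : (n, d+1) evaluated inputs, last column = fidelity; y : (n,) outcomes.
    pending : list of (d+1,) inputs already running but not yet observed.
    """
    gp = GaussianProcessRegressor(normalize_y=True)
    if pending:
        # "Kriging believer": hallucinate pending results at the posterior
        # mean so the batch spreads out instead of re-picking the same point.
        gp.fit(X, y)
        X = np.vstack([X] + [p[None] for p in pending])
        y = np.concatenate([y, gp.predict(np.asarray(pending))])
    gp.fit(X, y)
    d = X.shape[1] - 1
    cands = rng.random((256, d))
    best, best_score = None, -np.inf
    for fid, cost in COSTS.items():
        Z = np.hstack([cands, np.full((len(cands), 1), fid)])
        mu, sd = gp.predict(Z, return_std=True)
        scores = (mu + 2.0 * sd) / cost      # cost-aware UCB
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best, best_score = Z[i], scores[i]
    return best
```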
Most methods for crafting adversarial attacks focus on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, on the other hand, contain multiple dominant objects that are semantically related. It is therefore crucial to explore attack strategies that go beyond learning on single-object scenes or attacking single-object victim classifiers. Owing to their inherent property of strong transferability of perturbations to unknown models, this paper presents the first approach for adversarial attacks on multi-object scenes using generative models. To represent the relationships between different objects in the input scene, we leverage the open-source, pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), motivated by exploiting the semantics encoded in language alongside the visual space. We call this attack approach Generative Adversarial Multi-object scene Attack (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool to train robust perturbation generators for multi-object scenes. Using joint image-text features to train the generator, we show that GAMA can craft effective, transferable perturbations in various attack settings to fool victim classifiers. For example, GAMA triggers roughly 16% more misclassification than state-of-the-art generative approaches in the black-box setting, where both the classifier architecture and the attacker's data distribution differ from those of the victim. Our code will be made publicly available soon.
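A hedged sketch of one CLIP-guided training objective for such a perturbation generator is shown below; it pushes the perturbed image's CLIP embedding away from a text description of the multi-object scene. The generator, loss form, and preprocessing shortcuts are illustrative assumptions, not GAMA's exact formulation (a real attack would typically add adversarial terms against feature or classifier outputs).

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP

model, _ = clip.load("ViT-B/32")

def clip_guided_attack_loss(generator, images, captions, eps=8 / 255):
    """images: (B, 3, 224, 224) in [0, 1]; CLIP's own resize/normalization
    is omitted for brevity. `generator` is a hypothetical perturbation net.
    """
    delta = eps * torch.tanh(generator(images))          # bounded perturbation
    adv = (images + delta).clamp(0, 1)
    img_emb = model.encode_image(adv)
    txt_emb = model.encode_text(clip.tokenize(captions).to(images.device))
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Minimizing this decreases image-text agreement in CLIP's joint space.
    return (img_emb * txt_emb).sum(dim=-1).mean()
```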
Diffusion models have shown incredible capabilities as generative models; indeed, they underpin the current state-of-the-art models for text-conditioned image generation such as Imagen and DALL-E 2. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian hierarchical variational autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrarily noisified version of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and explicitly connect the variational perspective of diffusion models with the score-based generative modeling perspective through Tweedie's formula. Lastly, we cover how to learn a conditional distribution with diffusion models via guidance.
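The three prediction targets are interchangeable under the standard closed-form forward process; the equations below (standard VDM notation, summarized rather than reproduced from the paper) state the relationship that Tweedie's formula makes explicit.

```latex
% Standard VDM notation: \bar{\alpha}_t is the cumulative signal level.
% The forward noising process admits the closed form
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,
  \qquad \epsilon \sim \mathcal{N}(0, I).
% The three interchangeable prediction targets:
%   (1) \hat{x}_\theta(x_t, t)        -- the original source input,
%   (2) \hat{\epsilon}_\theta(x_t, t) -- the source noise,
%   (3) s_\theta(x_t, t)              -- the score \nabla_{x_t}\log p(x_t).
% Tweedie's formula links them:
\nabla_{x_t} \log p(x_t)
  = -\frac{\epsilon}{\sqrt{1-\bar{\alpha}_t}}
  = \frac{\sqrt{\bar{\alpha}_t}\, x_0 - x_t}{1-\bar{\alpha}_t}.
```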
This paper considers optimization proxies for Optimal Power Flow (OPF), i.e., machine-learning models that approximate the input/output relationship of OPF. Recent work has focused on showing that such proxies can achieve high fidelity. However, their training requires significant amounts of data, where each instance necessitates (offline) solving an OPF for a sample of the input distribution. To meet the requirements of market-clearing applications, this paper proposes Active Bucketized Sampling (ABS), a novel active-learning framework aimed at training the best possible OPF proxy within a time limit. ABS partitions the input distribution into buckets and uses an acquisition function to determine where to sample next. It relies on an adaptive learning rate that increases and decreases over time. Experimental results demonstrate the benefits of ABS.
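An illustrative skeleton of such a loop is sketched below; the bucket and acquisition interfaces are assumptions for exposition, not the paper's exact algorithm.

```python
import time
import numpy as np

def abs_style_loop(buckets, solve_opf, train_step, acquisition,
                   time_budget_s, batch=64, rng=None):
    """Repeatedly pick the bucket of the input distribution where the proxy
    looks worst, label fresh samples there with the (offline) OPF solver,
    and take a training step, all within a fixed wall-clock budget."""
    rng = rng or np.random.default_rng(0)
    data, start = [], time.time()
    while time.time() - start < time_budget_s:
        # Score each bucket, e.g. by the proxy's current validation error there.
        scores = [acquisition(b) for b in buckets]
        bucket = buckets[int(np.argmax(scores))]
        xs = bucket.sample(batch, rng)       # draw inputs from the chosen bucket
        ys = [solve_opf(x) for x in xs]      # ground-truth labels from the solver
        data.extend(zip(xs, ys))
        train_step(data)                     # proxy update; adaptive LR lives here
    return data
```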
Fast approximations to matrix multiplication have the potential to dramatically reduce the cost of neural network inference. Recent work on approximate matrix multiplication proposes replacing expensive multiplications with table lookups by fitting fast hash functions to the training data. In this work, we propose improvements to this previous work, targeted at the deep learning inference setting, where one has access to both the training data and the fixed (already learned) model weight matrices. We further propose a fine-tuning procedure to accelerate entire neural networks while minimizing the loss in accuracy. Finally, we analyze the proposed method on a simple image classification task. While we show improvements over the prior work, overall classification accuracy remains substantially degraded compared with exact matrix multiplication. Despite these negative results, our work points the way toward future efforts to accelerate inner products with fast nonlinear hashing methods.
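The lookup-table idea can be made concrete with a product-quantization-style sketch: cluster column-blocks of the input into prototypes, precompute each prototype's dot products with the weights, and replace multiplies with table lookups at inference. This sketch uses exact nearest-prototype encoding rather than the fast learned hash functions the line of work actually relies on.

```python
import numpy as np

def build_tables(A_train, W, n_codebooks=4, K=16, iters=10,
                 rng=np.random.default_rng(0)):
    """PQ-style approximate A @ W: per column-block of A, cluster rows into K
    prototypes and precompute each prototype's dot products with W's rows."""
    D = A_train.shape[1]
    splits = np.array_split(np.arange(D), n_codebooks)
    protos, tables = [], []
    for idx in splits:
        X = A_train[:, idx]
        C = X[rng.choice(len(X), K, replace=False)]        # k-means init
        for _ in range(iters):
            assign = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(K):
                if np.any(assign == k):
                    C[k] = X[assign == k].mean(0)
        protos.append((idx, C))
        tables.append(C @ W[idx])                          # (K, out) lookup table
    return protos, tables

def approx_matmul(A, protos, tables):
    out = np.zeros((len(A), tables[0].shape[1]))
    for (idx, C), T in zip(protos, tables):
        codes = np.argmin(((A[:, idx][:, None] - C[None]) ** 2).sum(-1), axis=1)
        out += T[codes]                                    # lookups replace multiplies
    return out
```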
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) efficiently quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints, improving sampling efficiency by incorporating domain knowledge in engineering settings and by modeling search-space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs on par with state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features, and it outperforms competing methods on problems combining mixed-variable feature spaces with known input constraints.
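One way the kernel interpretation can be made concrete is the leaf-agreement kernel sketched below, fed into textbook Gaussian process posterior formulas; the paper's actual formulation, and its constrained acquisition optimization, differ from this minimal version.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_kernel(forest, A, B):
    """Agreement kernel: k(a, b) = fraction of trees in which a and b fall
    into the same leaf. This is a common kernel view of a tree ensemble."""
    La, Lb = forest.apply(A), forest.apply(B)        # (n, n_trees) leaf ids
    return (La[:, None, :] == Lb[None, :, :]).mean(-1)

def gp_posterior(forest, X_train, y_train, X_query, noise=1e-3):
    K = tree_kernel(forest, X_train, X_train) + noise * np.eye(len(X_train))
    Ks = tree_kernel(forest, X_query, X_train)
    Kss = tree_kernel(forest, X_query, X_query)
    mu = Ks @ np.linalg.solve(K, y_train)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mu, var                                   # variance drives exploration

X = np.random.rand(50, 3); y = np.sin(X).sum(1)
forest = RandomForestRegressor(n_estimators=100).fit(X, y)
mu, var = gp_posterior(forest, X, y, np.random.rand(5, 3))
```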
Facial expression analysis has long been an active research area in computer vision. Traditional methods mainly analyze images of prototypical discrete emotions; as a result, they cannot accurately describe the complex emotional states of humans. Furthermore, illumination variance remains a challenge for facial analysis in the visible light spectrum. To address these issues, we propose using a valence-arousal-based dimensional model to represent a wider range of emotions, in combination with near-infrared (NIR) images, which are more robust to illumination changes. Since no existing NIR facial expression dataset has valence-arousal labels, we provide two complementary data augmentation methods (a face-morphing method and a CycleGAN-based method) that can create NIR image datasets with dimensional emotion labels from existing categorical and/or visible-light datasets. Our experiments show that these generated NIR datasets are comparable to existing datasets in terms of data quality and baseline prediction performance.
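The core constraint behind the CycleGAN-based augmentation is cycle consistency, sketched below; the generator networks, adversarial terms, and weight lambda are omitted or assumed, so this is a fragment of a training objective rather than the authors' full pipeline.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_vis2nir, G_nir2vis, vis, nir, lam=10.0):
    """Translating to the other spectrum and back should recover the input,
    which is what lets emotion labels be carried from visible-light
    expression datasets onto the generated NIR images."""
    rec_vis = G_nir2vis(G_vis2nir(vis))   # visible -> NIR -> visible
    rec_nir = G_vis2nir(G_nir2vis(nir))   # NIR -> visible -> NIR
    return lam * (F.l1_loss(rec_vis, vis) + F.l1_loss(rec_nir, nir))
```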